<?xml version="1.0" encoding="ISO-8859-1"?>
<metadatalist>
	<metadata ReferenceType="Conference Proceedings">
		<site>sibgrapi.sid.inpe.br 802</site>
		<holdercode>{ibi 8JMKD3MGPEW34M/46T9EHH}</holdercode>
		<identifier>8JMKD3MGPEW34M/43ALKUL</identifier>
		<repository>sid.inpe.br/sibgrapi/2020/09.25.14.27</repository>
		<lastupdate>2020:09.25.14.27.50 sid.inpe.br/banon/2001/03.30.15.38 administrator</lastupdate>
		<metadatarepository>sid.inpe.br/sibgrapi/2020/09.25.14.27.50</metadatarepository>
		<metadatalastupdate>2022:06.14.00.00.07 sid.inpe.br/banon/2001/03.30.15.38 administrator {D 2020}</metadatalastupdate>
		<doi>10.1109/SIBGRAPI51738.2020.00053</doi>
		<citationkey>Marcílio-JrEler:2020:AsSHVa</citationkey>
		<title>From explanations to feature selection: assessing SHAP values as feature selection mechanism</title>
		<format>On-line</format>
		<year>2020</year>
		<numberoffiles>1</numberoffiles>
		<size>508 KiB</size>
		<author>Marcílio-Jr, Wilson Estécio,</author>
		<author>Eler, Danilo Medeiros,</author>
		<affiliation>São Paulo State University (UNESP) - Department of Mathematics and Computer Science, Presidente Prudente-SP</affiliation>
		<affiliation>São Paulo State University (UNESP) - Department of Mathematics and Computer Science, Presidente Prudente-SP</affiliation>
		<editor>Musse, Soraia Raupp,</editor>
		<editor>Cesar Junior, Roberto Marcondes,</editor>
		<editor>Pelechano, Nuria,</editor>
		<editor>Wang, Zhangyang (Atlas),</editor>
		<e-mailaddress>wilson.marcilio@unesp.br</e-mailaddress>
		<conferencename>Conference on Graphics, Patterns and Images, 33 (SIBGRAPI)</conferencename>
		<conferencelocation>Porto de Galinhas (virtual)</conferencelocation>
		<date>7-10 Nov. 2020</date>
		<publisher>IEEE Computer Society</publisher>
		<publisheraddress>Los Alamitos</publisheraddress>
		<booktitle>Proceedings</booktitle>
		<tertiarytype>Full Paper</tertiarytype>
		<transferableflag>1</transferableflag>
		<versiontype>finaldraft</versiontype>
		<keywords>feature selection, explainability</keywords>
		<abstract>Explainability has become one of the most discussed topics in machine learning research in recent years. Although many methodologies have been proposed to explain the output of black-box models, little attention has been paid to the pre-processing steps in the machine learning development pipeline, such as feature selection. In this work, we evaluate SHAP, a game-theoretic approach for explaining the output of any machine learning model, as a feature selection mechanism. Our experiments show that, besides explaining a model's decisions, it achieves better results than three commonly used feature selection algorithms.</abstract>
		<language>en</language>
		<targetfile>PID6618233.pdf</targetfile>
		<usergroup>wilson.marcilio@unesp.br</usergroup>
		<visibility>shown</visibility>
		<documentstage>not transferred</documentstage>
		<mirrorrepository>sid.inpe.br/banon/2001/03.30.15.38.24</mirrorrepository>
		<nexthigherunit>8JMKD3MGPEW34M/43G4L9S</nexthigherunit>
		<nexthigherunit>8JMKD3MGPEW34M/4742MCS</nexthigherunit>
		<citingitemlist>sid.inpe.br/sibgrapi/2020/10.28.20.46 3</citingitemlist>
		<hostcollection>sid.inpe.br/banon/2001/03.30.15.38</hostcollection>
		<username>wilson.marcilio@unesp.br</username>
		<agreement>agreement.html .htaccess .htaccess2</agreement>
		<lasthostcollection>sid.inpe.br/banon/2001/03.30.15.38</lasthostcollection>
		<url>http://sibgrapi.sid.inpe.br/rep-/sid.inpe.br/sibgrapi/2020/09.25.14.27</url>
	</metadata>
</metadatalist>